Newton-Type Methods for Stochastic Programming

Author

  • X. Chen
Abstract

Stochastic programming is concerned with practical procedures for decision-making under uncertainty, by modelling the uncertainties and risks associated with decisions in a form suitable for optimization. The field is developing rapidly, with contributions from many disciplines such as operations research, probability and statistics, and economics. A stochastic linear program with recourse can equivalently be formulated as a convex programming problem. The problem is often large-scale, as the objective function involves an expectation, either over a discrete set of scenarios or as a multidimensional integral. Moreover, the objective function is possibly nondifferentiable. This paper provides a brief overview of recent developments on smooth approximation techniques and Newton-type methods for solving two-stage stochastic linear programs with recourse, and on parallel implementation of these methods. A simple numerical example is used to signal the potential of smoothing approaches.
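For context, the two-stage stochastic linear program with recourse discussed above is conventionally written as follows (standard notation, assumed here rather than taken from the paper itself):

\[
\min_{x}\; c^{\top}x + \mathbb{E}_{\omega}\bigl[Q(x,\omega)\bigr]
\quad\text{s.t.}\quad Ax = b,\ x \ge 0,
\]
\[
Q(x,\omega) \;=\; \min_{y}\;\bigl\{\, q(\omega)^{\top}y \;:\; Wy = h(\omega) - T(\omega)x,\ y \ge 0 \,\bigr\}.
\]

For each scenario $\omega$ the recourse function $Q(\cdot,\omega)$ is convex and piecewise linear, so the expected objective is convex but in general nondifferentiable; smoothing approaches replace $Q$ by a smooth approximation $Q_\varepsilon$ with $Q_\varepsilon \to Q$ as $\varepsilon \to 0$, so that Newton-type methods can be applied to the smoothed problem.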


Similar articles

Stochastic Newton and Quasi-Newton Methods for Large Linear Least-squares Problems

We describe stochastic Newton and stochastic quasi-Newton approaches to efficiently solve large linear least-squares problems where the very large data sets present a significant computational burden (e.g., the size may exceed computer memory or data are collected in real-time). In our proposed framework, stochasticity is introduced in two different frameworks as a means to overcome these compu...
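As a rough illustration of this general idea (a generic sketch under assumed conventions, not necessarily the algorithm of the cited paper), a stochastic Newton step for $\min_x \|Ax-b\|^2$ can keep the exact gradient while replacing the Gauss-Newton matrix $A^{\top}A$ by a row-subsampled estimate; the function name and parameters below are hypothetical:

import numpy as np

def subsampled_newton_lsq(A, b, num_iters=20, batch_rows=256, reg=1e-6, seed=0):
    # Illustrative stochastic Newton method for min_x ||Ax - b||^2:
    # exact gradient, row-subsampled Hessian estimate (hypothetical sketch).
    rng = np.random.default_rng(seed)
    m, d = A.shape
    x = np.zeros(d)
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)                      # full gradient, O(m d)
        idx = rng.choice(m, size=min(batch_rows, m), replace=False)
        A_s = A[idx]
        # (m/|S|) * A_S^T A_S is an unbiased estimate of A^T A; a small
        # ridge term keeps the linear solve well posed.
        H = (m / len(idx)) * (A_s.T @ A_s) + reg * np.eye(d)
        x -= np.linalg.solve(H, grad)                 # Newton-type step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((5000, 50))
    x_true = rng.standard_normal(50)
    b = A @ x_true + 0.01 * rng.standard_normal(5000)
    print(np.linalg.norm(subsampled_newton_lsq(A, b) - x_true))

The point of the subsampling is that building the Hessian estimate touches only a small batch of rows per iteration, which is what makes the step affordable when the data set is too large to process in full.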


Stability of two classes of improved backward Euler methods for stochastic delay differential equations of neutral type

This paper examines the stability of two classes of improved backward Euler methods, namely the split-step $(\theta,\lambda)$-backward Euler (SSBE) and semi-implicit $(\theta,\lambda)$-Euler (SIE) methods, for nonlinear neutral stochastic delay differential equations (NSDDEs). It is proved that the SSBE method with $\theta,\lambda\in(0,1]$ can recover the exponential mean-square stability with some...


A Variance Reduced Stochastic Newton Method

Quasi-Newton methods are widely used in practice for convex loss minimization problems. These methods exhibit good empirical performance on a wide variety of tasks and enjoy super-linear convergence to the optimal solution. For large-scale learning problems, stochastic quasi-Newton methods have recently been proposed. However, these typically only achieve sub-linear convergence rates and have no...
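For intuition only, the following generic sketch (an assumed construction, not the method of the cited paper) combines an SVRG-style variance-reduced gradient estimate with a subsampled-Hessian Newton direction on a finite-sum least-squares objective; all names are hypothetical:

import numpy as np

def vr_stochastic_newton(A, b, epochs=10, inner=200, hess_batch=128, reg=1e-6, seed=0):
    # Generic variance-reduced stochastic Newton sketch for
    # f(x) = (1/(2m)) * ||Ax - b||^2 (illustrative only).
    rng = np.random.default_rng(seed)
    m, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / m        # snapshot (full) gradient
        for _ in range(inner):
            i = rng.integers(m)
            # SVRG-style unbiased gradient estimate at the current iterate.
            g = A[i] * (A[i] @ x - b[i]) - A[i] * (A[i] @ x_snap - b[i]) + full_grad
            idx = rng.choice(m, size=min(hess_batch, m), replace=False)
            H = A[idx].T @ A[idx] / len(idx) + reg * np.eye(d)   # subsampled Hessian
            x -= np.linalg.solve(H, g)
        # the snapshot is refreshed at the start of the next epoch
    return x

The variance-reduction trick is that each per-sample gradient is corrected by its value at a periodically refreshed snapshot plus the snapshot's full gradient, so the noise in the step shrinks as the iterates approach the solution.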


Speeding-Up Convergence via Sequential Subspace Optimization: Current State and Future Directions

This is an overview paper written in the style of a research proposal. In recent years we introduced a general framework for large-scale unconstrained optimization, Sequential Subspace Optimization (SESOP), and demonstrated its usefulness for sparsity-based signal/image denoising, deconvolution, compressive sensing, computed tomography, diffraction imaging, and support vector machines. We explored its co...


A New High Order Closed Newton-Cotes Trigonometrically-fitted Formulae for the Numerical Solution of the Schrodinger Equation

In this paper, we investigate the connection between closed Newton-Cotes formulae, trigonometrically-fitted methods, symplectic integrators and the efficient integration of the Schrödinger equation. Multistep symplectic integrators have received little study, although over the last decades several one-step symplectic integrators have been produced based on symplectic geometry (see the relevant lit...
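For reference, a closed Newton-Cotes rule on $n+1$ equally spaced nodes, including both endpoints, has the standard form below; trigonometrically-fitted variants such as those studied in the cited paper adjust the weights so that trigonometric functions of a prescribed frequency are integrated exactly:

\[
\int_{x_0}^{x_n} f(x)\,dx \;\approx\; h \sum_{i=0}^{n} w_i\, f(x_0 + i h),
\qquad h = \frac{x_n - x_0}{n},
\]

for example, $n = 2$ with $w_0 = w_2 = \tfrac{1}{3}$ and $w_1 = \tfrac{4}{3}$ recovers Simpson's rule.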



Journal:

Volume   Issue

Pages  -

Publication date: 2007